Metalua

Metalua is a metaprogramming extension to Lua 5.1, i.e., a compiler that lets you extend the language's syntax and semantics from within your programs.

The subtitle for the homepage is: "Lisp extensibility, Lua syntax"

The two "samples" are rather impressive:

Typechecking introduces optional runtime type checking into Lua. It features syntax extension (the addition of a typing operator), modification of existing syntax (functions get their parameters and return values checked), and non-local code modification (return statements inside functions are chased down so that additional typechecking code can be inserted).

Pattern matching demonstrates how to add pattern matching, a distinctive feature of languages from the ML family, to Lua. These complex testing control statements are compiled into a (possibly long) list of if statements and local variable bindings. They are supported by a syntax extension which shows how to save a lot of work by reusing Lua's expression syntax with a very different semantics.
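
To make the "list of if statements and local variable bindings" concrete, here is a minimal sketch of the idea, in Scala rather than the Lua that Metalua would actually generate: a match over a small expression type, followed by a hand-expanded version of roughly what a pattern-matching compiler produces.

    sealed trait Expr
    case class Num(n: Int)           extends Expr
    case class Add(l: Expr, r: Expr) extends Expr

    // The convenient surface syntax...
    def eval(e: Expr): Int = e match {
      case Num(n)    => n
      case Add(l, r) => eval(l) + eval(r)
    }

    // ...and its expansion into a chain of tests plus local bindings.
    def evalExpanded(e: Expr): Int =
      if (e.isInstanceOf[Num]) {
        val n = e.asInstanceOf[Num].n   // local binding the match introduces
        n
      } else if (e.isInstanceOf[Add]) {
        val a = e.asInstanceOf[Add]     // another local binding
        evalExpanded(a.l) + evalExpanded(a.r)
      } else sys.error("match failure")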

Note for ML programmers: I feared that pattern matching without type inference would be hard to use; it's actually quite pleasant, and I didn't have many issues with the lack of static typing.

Kourier is now live

I just stumbled across the February 28 development-blog posting from the commercial online game Vendetta Online:

The new Erlang-based system is now in production. For those who haven't been following, we ran into problems with our existing Lisp-based system (named "Deliverator") which handles high-level AI behaviour... large groups of NPCs, large battles and the like. Over the last couple of months, we've been in the process of migrating to a much more scalable architecture (named "Kourier") based on Erlang, an elegant distributed-programming platform. ...

I do enjoy starting the day with a nice cup of coffee and an interesting read like this. :-)

Propositions as [Types]

Propositions as [Types], Steve Awodey and Andrej Bauer.

Image factorizations in regular categories are stable under pullbacks, so they model a natural modal operator in dependent type theory. This unary type constructor [A] has turned up previously in a syntactic form as a way of erasing computational content, and formalizing a notion of proof irrelevance. Indeed, semantically, the notion of a support is sometimes used as surrogate proposition asserting inhabitation of an indexed family.

We give rules for bracket types in dependent type theory and provide complete semantics using regular categories. We show that dependent type theory with the unit type, strong extensional equality types, strong dependent sums, and bracket types is the internal type theory of regular categories, in the same way that the usual dependent type theory with dependent sums and products is the internal type theory of locally cartesian closed categories.

We also show how to interpret first-order logic in type theory with brackets, and we make use of the translation to compare type theory with logic. Specifically, we show that the propositions-as-types interpretation is complete with respect to a certain fragment of intuitionistic first-order logic. As a consequence, a modified double-negation translation into type theory (without bracket types) is complete for all of classical first-order logic.

I figure I should say a few words about why proof irrelevance is an interesting idea. In math, you often want to consider elements of a certain type that have certain properties -- for example, you might want to use the set of satisfiable boolean formulas as a hypothesis of a theorem.

The way you might represent this in dependent type theory is by giving a pair, consisting of a formula and a satisfying assignment. You might write this as sigma f:Formula. assignment(f). The trouble is that the elements of this type carry more information than the subset contains; in addition to the formulas, you also have the assignments. So if you wanted to write a function that took values of this type and computed satisfying assignments, you could do it in constant time by just returning the second component of the pair, the evidence that the formula is satisfiable. This is not what you want if your goal was to show that you can compute the satisfying assignment of a satisfiable formula.

One way of addressing this problem is to treat the second component of the pair as a proof irrelevant type. The basic idea is that for each type A, you make up a new type [A] which has either zero or one inhabitants, depending on whether A has any inhabitants or not. This means that you cannot make any computational use of data of type [A], because it doesn't have any structure to it. So if your solver receives a value of type sigma f:Formula. [assignment(f)], then you know that the formula is satisfiable, but you don't know what it is, because the second component is a bracket type. This means that your function will actually have to do the real work you intended.
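
Schematically (my paraphrase, not the paper's exact presentation), the bracket type has formation and introduction rules plus an equation collapsing all of its inhabitants, which is precisely what makes it computationally useless and hence safe as a pure proposition:

    % Formation and introduction: any a : A gives an inhabitant of [A].
    \frac{\Gamma \vdash A \;\mathrm{type}}{\Gamma \vdash [A] \;\mathrm{type}}
    \qquad
    \frac{\Gamma \vdash a : A}{\Gamma \vdash [a] : [A]}

    % Proof irrelevance: any two inhabitants of [A] are equal.
    \frac{\Gamma \vdash p : [A] \qquad \Gamma \vdash q : [A]}{\Gamma \vdash p = q : [A]}

    % The solver's input type from the example above:
    \Sigma f{:}\mathrm{Formula}.\; [\mathrm{assignment}(f)]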

I should also add that this is an old idea in type theory; this is the paper that made it clear to me.

(Thanks to Bob Harper for suggesting clarifications to my comments.)

Ed Felten: Why Understanding Programs is Hard

In contrast to his earlier attempts, which I appreciated, this time Felten does not try to prove his point by explaining the fundamental facts of computer science. Understanding programs is hard, he demonstrates here, because the results of some mathematical functions are hard to predict (in his example, a secure hash function).
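
Here is a minimal sketch of the flavour of Felten's example, in Scala (the input string and the branch condition are my own invention): even for this five-line program, inspection alone won't tell you which branch a given input takes, because that would require predicting a secure hash.

    import java.security.MessageDigest

    object Unpredictable {
      def main(args: Array[String]): Unit = {
        val input  = if (args.nonEmpty) args(0) else "some input"
        val sha256 = MessageDigest.getInstance("SHA-256")
        val digest = sha256.digest(input.getBytes("UTF-8"))
        // Which branch runs depends on one byte of the digest, which nobody
        // knows how to predict short of actually computing the hash.
        if (digest(0) == 0) println("surprise!")   // roughly 1 in 256 inputs
        else                println("business as usual")
      }
    }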

While this enables Felten to give a short and easy-to-understand example, I think he misses the chance to explain why understanding (and predicting) program behaviour is hard in general. That would bring him back to the fundamental principles he discussed before, stemming from the halting problem. Going this route would make it possible to relate the issue under discussion to topics more directly relevant to LtU: how programming languages can help make analyzing program behaviour easier, how languages can restrict possible behaviours, etc.

Be that as it may, more people outside the profession should come to appreciate that understanding program behaviour is hard. This is something that takes time to appreciate fully, even when you are a programmer. It is one thing to know that finding bugs is hard in practice and quite another to appreciate the reasons why it is inherently so.

Finite State Machines in Forth

No matter what your opinion of Forth, this is an enticing paper. J.V. Noble takes an interesting problem from rough implementation through an elegant solution which takes advantage of the compile-time features of Forth. There aren't many easily accessible papers that show how beautiful Forth can be, so this is worth taking some time to understand, especially if you've a fondness for stack-based languages.
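
To give a feel for the table-driven style the paper builds up to, here is a sketch in Scala rather than Noble's Forth, with a made-up example: a recognizer for signed integers whose transition function is fixed up front, so the driver just folds the input through it. (Noble's point is that Forth lets you construct such tables at compile time; none of that machinery is shown here.)

    object IntRecognizer {
      // States of the machine.
      val Start = 0; val Sign = 1; val Digits = 2; val Fail = 3

      // The transition table, written as a function from (state, char) to state.
      def next(state: Int, c: Char): Int = (state, c) match {
        case (Start,  '+' | '-')      => Sign
        case (Start,  d) if d.isDigit => Digits
        case (Sign,   d) if d.isDigit => Digits
        case (Digits, d) if d.isDigit => Digits
        case _                        => Fail   // Fail is absorbing
      }

      // Run the machine: accept iff we end in the Digits state.
      def accepts(s: String): Boolean = s.foldLeft(Start)(next) == Digits

      def main(args: Array[String]): Unit = {
        println(accepts("-42"))  // true
        println(accepts("4-2"))  // false
      }
    }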

On the fact that the Atlantic Ocean has two sides

EWD611, Edsger Dijkstra, 1976

My subject should be very simple, for it is only the difference between the orientations of computing science at two sides of the Atlantic Ocean.

I chased this up while looking for Alan Kay's rebuttal "On the fact that most software is written on one side of the Atlantic Ocean" which I couldn't find. I was struck by these passages:

My first visit to the USA, in 1963, was the result of an amazing invitation from the ACM. Without the obligation to present a paper I was asked to attend —as "invited participant", so to speak— a three-day conference in Princeton: for the opportunity of having me sitting in the audience and participating in the discussions, my hosts were willing to pay my expenses, travel included! As you can imagine, I felt quite elated, but shortly after the conference had started, I was totally miserable: the first speaker gave a most impressive talk with wall-to-wall formulae and displayed a mastery of elaborate syntax theory, of which I had not even suspected the existence! I could only understand the first five minutes of his talk, and realized that I was only a poor amateur, sitting in the audience on false pretences.

I skipped lunch, walking around all by myself, trying to make out what the first speaker had told us. I got vaguely funny feelings, but it was only during the cocktail party that evening, when I had recovered enough to dare to consider that it had all been humbug. Tentatively I transmitted my doubts to one of the other participants. He was amused by my innocence. Didn't I know that the first performer was a complete bogus speaker? Of course it was all humbug, everybody in the audience knew that! Puzzled I asked him why the man had been invited and why, at the end, some of the participants had even faked a discussion. "Oh, on occasions like that, we just go through the motions. IBM is one of the sponsors of this conference, so we had to accept an IBM speaker. He was given the first slot, because the sooner that is over, the better." I was flabbergasted.

This vividly reminds me of the first time I attended an ACM conference on programming languages! Though it's amazing to think that back in the golden age there would be just one bogus talk :-)

The Computer Revolution Hasn't Happened Yet

Google Video recording of Alan Kay's OOPSLA 1997 keynote. Very good!

Actors that Unify Threads and Events

In this paper we present an abstraction of actors that combines the benefits of thread-based and event-based concurrency. Threads support blocking operations such as system I/O, and can be executed on multiple processor cores in parallel. Event-based computation, on the other hand, is more lightweight and scales to large numbers of actors. We also present a set of combinators that allows a flexible composition of these actors.

Philipp Haller and Martin Odersky also wrote Event-Based Programming without Inversion of Control, which was discussed here last year. You may also remember A Language-based Approach to Unifying Events and Threads (Li and Zdancewic).
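
In the Scala actors library the paper describes (since deprecated), the two styles are a one-word difference: receive parks a real thread waiting for a message, while react installs the handler as a continuation, so large numbers of actors can share a small thread pool. A minimal sketch of the event-based style (my example, not the paper's code):

    import scala.actors.Actor._

    object Echo {
      def main(args: Array[String]): Unit = {
        val echo = actor {
          loop {
            react {                   // no thread is blocked here
              case msg: String => println("echo: " + msg)
            }
          }
        }
        echo ! "hello"                // asynchronous send
        echo ! "world"
      }
    }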

DanFest 2004 videos online

I am pleased to announce that twenty-one talks from Dan Friedman's 60th birthday festschrift are now on Google Video. You can find links to the talks on the official DanFest home page. Every LtU reader should find at least several talks of interest:

  • Andrew Hanson: Welcome to DanFest
  • Mitch Wand: Relating models of backtracking
  • Lynn Winebarger: Extending Scheme for bottom-up relational programming
  • Bil Lewis: Debugging backwards in time
  • Steve Ganz: Monadic Encapsulation of State
  • Kent Dybvig: The guaranteed optimization clause of the macro-writer's bill of rights
  • David Wise: Introduction for Guy Steele's keynote address
  • Guy Steele: Dan Friedman: Cool Ideas (Keynote address)
  • Olin Shivers: The anatomy of a loop: a story of scope and control
  • Kevin Millikin: Obfuscating transformations via abstract interpretation
  • Bob Filman: Poetry in programs: A brief examination of software aesthetics, including some observations on the history of programming styles and some speculations on post-object programming
  • Gerald Jay Sussman: The role of programming in the formulation of ideas
  • Anurag Mendhekar: Aspect-oriented programming in the real world
  • Shriram Krishnamurthi: Verification of web programs
  • Jim Marshall: Introductory cognitive science course using Scheme
  • Rhys Price Jones: DNA analysis
  • Oleg Kiselyov: Normal-order syntax-rules and proving the fix-point of call/cc
  • Julia Lawall: On designing a target-independent DSL for safe OS process scheduling components
  • Kathi Fisler: Aspect verification using model checking
  • Jonathan Sobel: Implementing Categorical Semantics
  • Matthias Felleisen and Robby Findler: An investigation of contracts as projections

DanFest was previously mentioned on LtU. In fact, Ehud wrote, "I sure wish that [Guy Steele's] keynote was available online somewhere." Not only will you enjoy the keynote, Ehud, but you might even find a talk given in the style of The Little Schemer. :-)

In addition to watching the talks online, you can download the videos to your computer and watch them at a higher resolution with Google Video Player.

Enjoy!

--Will

Using Category Theory to Design Implicit Conversions and Generic Operators

Using Category Theory to Design Implicit Conversions and Generic Operators, John C. Reynolds, 1980. (Try this if the official link doesn't work.)

A generalization of many-sorted algebras, called category-sorted algebras, is defined and applied to the language design problem of avoiding anomalies in the interaction of implicit conversions and generic operators. The definition of a simple imperative language (without any binding mechanisms) is used as an example.

This is an old, but still cute, paper. The basic intuition is that a good design principle for a language with implicit conversions is that whichever order of conversions the language applies, you should get the same result. Reynolds formalizes this by giving a category of types and conversions and demanding that everything commute properly. The commuting conditions are exactly what the language designer has to check to make sure that he or she hasn't screwed anything up.
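
A tiny illustration of the condition in Scala, with made-up money types (and with the conversions written as plain functions, since how a given language actually inserts them is beside the point): the direct conversion and the composite one must agree, i.e. the square must commute.

    object Coherence {
      case class Cents(n: Int)
      case class Dollars(d: Double)

      // Two paths from Cents to Double: direct, and via Dollars.
      def centsToDollars(c: Cents): Dollars   = Dollars(c.n / 100.0)
      def dollarsToDouble(d: Dollars): Double = d.d
      def centsToDouble(c: Cents): Double     = c.n / 100.0

      // Reynolds' design principle as a checkable property.
      def coherent(c: Cents): Boolean =
        centsToDouble(c) == dollarsToDouble(centsToDollars(c))

      def main(args: Array[String]): Unit =
        println(coherent(Cents(150)))   // true: both paths give 1.5
    }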

Someone could probably get a fun little paper by taking this idea and shooting some dependent types into it. (Maybe somebody already has?) If you've got an abstract type, a coercion function, and a proof that it satisfies Reynolds' conditions, now your compiler can silently insert those coercions for you as needed, but you can still be sure that it won't mess up the meaning of your program.